AMID: Approximation of MultI-measured Data using SVD
Authors
Abstract
Approximate query answering has recently emerged as an effective method for generating a viable answer. Among the various techniques for approximate query answering, wavelets have received a lot of attention. However, wavelet techniques that minimize the root squared error (i.e., the L2 norm error) have several problems, such as poor quality of the reconstructed data when the original data is biased. In this paper, we present AMID (Approximation of MultI-measured Data using SVD) for multi-measured data. In AMID, we adapt the singular value decomposition (SVD) to compress multi-measured data. We show that SVD guarantees the root squared error, and we also derive an error bound of SVD for an individual data value using mathematical analysis. In addition, in order to improve the accuracy of the approximated data, we combine SVD and wavelets in AMID. Since SVD is applied to a fixed matrix, we use various properties of matrices to adapt SVD to the incremental update environment. We devise two variants of AMID for the incremental update environment: incremental AMID and local AMID. To the best of our knowledge, our work is the first to extend SVD to incremental update environments. © 2009 Elsevier Inc. All rights reserved.
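Although the paper's own algorithms are not reproduced here, the central idea of the abstract — compressing an n × m matrix of multi-measured tuples with a truncated SVD and bounding the L2 (Frobenius) reconstruction error by the discarded singular values — can be sketched in a few lines of NumPy. The matrix size, the number of retained singular values k, and the synthetic data below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

# Hypothetical multi-measured data: n tuples, each carrying m measure attributes.
rng = np.random.default_rng(0)
n, m, k = 1000, 8, 3                              # k = retained singular values (assumed)
data = rng.random((n, m)) @ rng.random((m, m))    # correlated measures

# Truncated SVD: keep only the k largest singular values and vectors.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

# The root squared (Frobenius/L2) error of the rank-k approximation equals the
# root of the sum of the squared discarded singular values (Eckart-Young).
print(np.linalg.norm(data - approx))
print(np.sqrt((s[k:] ** 2).sum()))
```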
Related articles
Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD
The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, to ensure an accurate approximation, a sufficient number of columns have to be sampled. On very large data sets, the singular value decomposition (SVD) step on the resultant data submatrix can quickly dominate the computations and become prohibitive. In this paper, we propose an accu...
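The snippet above refers to the standard Nyström construction, in which the kernel matrix is rebuilt from a sampled set of columns and the SVD (pseudo-inverse) of their small intersection block — the step that paper proposes to accelerate with a randomized SVD. A minimal sketch of that baseline, assuming a symmetric positive semi-definite kernel matrix K and uniform column sampling (the function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def nystrom_approx(K, c, rng=None):
    """Standard Nystrom approximation of a symmetric PSD kernel matrix K
    from c uniformly sampled columns (illustrative sketch only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = K.shape[0]
    idx = rng.choice(n, size=c, replace=False)
    C = K[:, idx]                          # n x c sampled columns
    W = C[idx, :]                          # c x c intersection block
    # SVD of the small block W; this is the step a randomized SVD would speed up.
    U, s, _ = np.linalg.svd(W)
    W_pinv = (U / np.where(s > 1e-12, s, np.inf)) @ U.T
    return C @ W_pinv @ C.T                # rank-(at most c) approximation of K

# Example: Gaussian kernel on random points, compared against the exact matrix.
X = np.random.default_rng(1).random((500, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)
K_hat = nystrom_approx(K, c=50)
print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```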
Multi-Level Cluster Indicator Decompositions of Matrices and Tensors
A main challenging problem for many machine learning and data mining applications is that the amount of data and features are very large, so that low-rank approximations of original data are often required for efficient computation. We propose new multi-level clustering based low-rank matrix approximations which are comparable and even more compact than Singular Value Decomposition (SVD). We ut...
Singular Value Decomposition and High-Dimensional Data
A data set with n measurements on p variables can be represented by an n × p data matrix X. In highdimensional settings where p is large, it is often desirable to work with a low-rank approximation to the data matrix. The most prevalent low-rank approximation is the singular value decomposition (SVD). Given X, an n × p data matrix, the SVD factorizes X as X = UDV ′, where U ∈ Rn×n and V ∈ Rp×p ...
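For reference, the optimality property behind truncating this factorization is the Eckart–Young theorem; in the snippet's notation, with d_i the i-th largest singular value and u_i, v_i the corresponding columns of U and V, it can be stated as:

```latex
X = U D V', \qquad
X_k = \sum_{i=1}^{k} d_i\, u_i v_i'
    = \operatorname*{arg\,min}_{\operatorname{rank}(B)\le k} \lVert X - B \rVert_F .
```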
The Singular Value Decomposition, Applications and Beyond
The singular value decomposition (SVD) is not only a classical theory in matrix computation and analysis, but also is a powerful tool in machine learning and modern data analysis. In this tutorial we first study the basic notion of SVD and then show the central role of SVD in matrices. Using majorization theory, we consider variational principles of singular values and eigenvalues. Built on SVD...
Approximation-free running SVD and its application to motion detection
In different tasks such as adaptive background modelling, the Singular Value Decomposition (SVD) has to be applied in running fashion. Typically, this happens when the SVD is used in a sliding spatial or temporal data window. Each time the window moves on, the SVD should be calculated in the batch mode from scratch, or re-calculated using the previous solution. When the data matrix is relativel...
Journal: Inf. Sci.
Volume 179, Issue -
Pages: -
Publication date: 2009